
    Phonemes and Syllables in Speech Perception: size of the attentional focus in French.

    A study by Pitt and Samuel (1990) found that English speakers could narrowly focus attention onto a precise phonemic position inside spoken words [1]. This led the authors to argue that the phoneme, rather than the syllable, is the primary unit of speech perception. Other evidence, obtained with a syllable detection paradigm, has been put forward to propose that the syllable is the unit of perception; yet, these experiments were run with French speakers [2]. In the present study, we adapted Pitt & Samuel's phoneme detection experiment to French and found that French subjects behave exactly like English subjects: they too can focus attention on a precise phoneme. To explain both this result and the established sensitivity to syllabic structure, we propose that the perceptual system automatically parses the speech signal into a syllabically structured phonological representation.

    Word recognition: do we need phonological representations?

    In what format(s) are spoken words memorized by the brain? Are word forms stored as abstract phonological representations, or rather as detailed acoustic-phonetic representations (for example, as a set of acoustic exemplars associated with each word)? We present a series of experiments whose results point to the existence of prelexical phonological processes in word recognition and suggest that spoken words are accessed using a phonological code.

    Neuronal bases of structural coherence in contemporary dance observation

    The neuronal processes underlying dance observation have been the focus of an increasing number of brain imaging studies over the past decade. However, the existing literature has mainly dealt with the effects of motor and visual expertise, whereas the neural and cognitive mechanisms underlying the interpretation of dance choreographies have remained unexplored. Hence, much attention has been given to the Action Observation Network (AON), whereas the role of other potentially relevant neuro-cognitive mechanisms, such as mentalizing (theory of mind) or language (narrative comprehension), in dance understanding has yet to be elucidated. We report the results of an fMRI study in which the structural coherence of short contemporary dance choreographies was manipulated parametrically using the same taped movement material. Our participants were all trained dancers. The whole-brain analysis indicates that the interpretation of structurally coherent dance phrases involves a subpart (Superior Parietal) of the AON as well as mentalizing regions in the dorsomedial Prefrontal Cortex. An ROI analysis based on a similar study using linguistic materials (Pallier et al. 2011) suggests that structural processing in language and dance might share certain neural mechanisms.

    Phonological representations and repetition priming

    A ubiquitous phenomenon in psychology is the "repetition effect": a repeated stimulus is processed better on its second occurrence than on its first. Yet what counts as a repetition? When a spoken word is repeated, is it the acoustic shape or the linguistic type that matters? In the present study, we contrasted the contributions of acoustic and phonological features by testing participants with different linguistic backgrounds: they came from two populations sharing a common vocabulary (Catalan) yet possessing different phonemic systems. They performed a lexical decision task with lists containing words that were repeated verbatim, as well as words that were repeated with one phonetic feature changed. The feature changes were phonemic, i.e. linguistically relevant, for one population but not for the other. The results revealed that the repetition effect was modulated by linguistic, not acoustic, similarity: it depended on the subjects' phonemic system.

    Perceptual adjustment to time-compressed Speech: a cross-linguistic study

    Previous research has shown that, when hearers listen to artificially speeded speech, their performance improves over the course of 10-15 sentences, as if their perceptual system were "adapting" to these fast rates of speech. In this paper, we further investigate the mechanisms responsible for such effects. In Experiment 1, we report that, for bilingual speakers of Catalan and Spanish, exposure to compressed sentences in either language improves performance on sentences in the other language. Experiment 2 reports that Catalan/Spanish transfer of performance occurs even in monolingual speakers of Spanish who do not understand Catalan. In Experiment 3, we study another pair of languages, namely English and French, and report no transfer of adaptation between these two languages for English-French bilinguals. Experiment 4, with monolingual English speakers, assesses transfer of adaptation from French, Dutch, and English toward English. Here we find that there is no adaptation from French and intermediate adaptation from Dutch. We discuss the locus of the adaptation to compressed speech and relate our findings to other cross-linguistic studies in speech perception.

    Epenthetic vowels in Japanese: A perceptual illusion?

    In four cross-linguistic experiments comparing French and Japanese hearers, we found that the phonotactic properties of Japanese (a very reduced set of syllable types) induce Japanese listeners to perceive "illusory" vowels inside consonant clusters in VCCV stimuli. In Experiments 1 and 2, we used a continuum of stimuli ranging from no vowel (e.g. ebzo) to a full vowel between the consonants (e.g. ebuzo). Japanese, but not French, participants reported the presence of a vowel [u] between consonants, even in stimuli with no vowel. A speeded ABX discrimination paradigm was used in Experiments 3 and 4 and revealed that Japanese participants had trouble discriminating between VCCV and VCuCV stimuli. French participants, in contrast, had trouble discriminating items that differ in vowel length (ebuzo vs. ebuuzo), a distinctive contrast in Japanese but not in French. We conclude that models of speech perception have to be revised to account for phonotactically based assimilations.

    EXPE: An expandable programming language for on-line psychological experiments

    EXPE is a DOS program for designing and running experiments that involve the presentation of audio or visual stimuli and the collection of on-line or off-line behavioral responses. Its flexibility also makes it a very useful tool for the rapid design of protocols for testing neuropsychological patients. EXPE provides a powerful scripting language that allows the user to specify all the components of an experiment in a human-readable file. Subjects' responses are saved in a user-specified format, also in readable ASCII files. A remarkable feature of EXPE is that the user can easily add new commands to the language: all instructions are calls to functions written in independent Borland Pascal units. Thus, users can link their own Pascal procedures to EXPE to meet any special need. This makes it possible, for example, to adapt EXPE to new hardware, such as new sound or video boards.

    Improving accuracy and power with transfer learning using a meta-analytic database

    Typical cohorts in brain imaging studies are not large enough for systematic testing of all the information contained in the images. To build testable working hypotheses, investigators thus rely on analysis of previous work, sometimes formalized in a so-called meta-analysis. In brain imaging, this approach underlies the specification of regions of interest (ROIs), which are usually selected on the basis of the coordinates of previously detected effects. In this paper, we propose to use a database of images, rather than coordinates, and frame the problem as transfer learning: learning a discriminant model on a reference task in order to apply it to a different but related new task. To facilitate statistical analysis of small cohorts, we use a sparse discriminant model that selects predictive voxels on the reference task and thus provides a principled procedure to define ROIs. The benefits of our approach are twofold. First, it uses the reference database for prediction, i.e. to provide potential biomarkers in a clinical setting. Second, it increases statistical power on the new task. We demonstrate on a set of 18 pairs of functional MRI experimental conditions that our approach gives good prediction. In addition, on a specific transfer situation involving different scanners at different locations, we show that voxel selection based on transfer learning leads to higher detection power on small cohorts. Comment: MICCAI, Nice, France (2012).
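    The transfer-learning idea described above can be illustrated with a minimal sketch on synthetic data. This is not the authors' pipeline: the data, the choice of an L1-penalized logistic regression as the sparse discriminant model, and all parameter values here are illustrative assumptions. The sparse model is fit on a large "reference" cohort; its nonzero coefficients define a data-driven ROI that is then reused for a small "new" cohort.

    ```python
    # Sketch: sparse voxel selection on a reference task, reused as an ROI
    # for a smaller related task. Synthetic data; all names are illustrative.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_voxels = 500

    # Reference task: a large cohort where only the first 20 voxels carry signal.
    X_ref = rng.normal(size=(200, n_voxels))
    y_ref = (X_ref[:, :20].sum(axis=1) > 0).astype(int)

    # An L1-penalized model drives most coefficients to zero; the surviving
    # voxels form a data-driven ROI.
    sparse = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    sparse.fit(X_ref, y_ref)
    roi = np.flatnonzero(sparse.coef_[0])

    # New task: a small cohort with related signal, analyzed only within the ROI.
    X_new = rng.normal(size=(30, n_voxels))
    y_new = (X_new[:, :20].sum(axis=1) > 0).astype(int)
    clf = LogisticRegression(max_iter=1000).fit(X_new[:, roi], y_new)
    print(f"{roi.size} voxels selected; accuracy {clf.score(X_new[:, roi], y_new):.2f}")
    ```

    Restricting the small-cohort analysis to the selected voxels is what buys the statistical power: far fewer comparisons are made than a whole-brain analysis would require.
    
    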

    Decoding Visual Percepts Induced by Word Reading with fMRI

    Word reading involves multiple cognitive processes. To infer which word is being visualized, the brain first processes the visual percept, deciphers the letters and bigrams, and activates different words based on context or prior expectations such as word frequency. In this contribution, we use supervised machine learning techniques to decode the first step of this processing stream using functional Magnetic Resonance Images (fMRI). We build a decoder that predicts the visual percept formed by four-letter words, allowing us to identify words that were not present in the training data. To do so, we cast the learning problem as multiple classification problems after describing words with multiple binary attributes. This work goes beyond the identification or reconstruction of single letters or simple geometrical shapes and addresses a challenging estimation problem, namely the prediction of multiple variables from a single observation, hence facing the problem of learning multiple predictors from correlated inputs.
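    The decoding scheme described above, casting word identification as several binary classification problems, can be sketched on synthetic data. The attributes, signal model, and vocabulary below are hypothetical (the paper's actual word descriptors and fMRI features are not reproduced here); the sketch only shows why a held-out word can be identified: each attribute value occurs in training even when the full combination does not.

    ```python
    # Sketch: predict several binary attributes independently, then identify
    # the stimulus as the vocabulary item with the nearest attribute code.
    # Synthetic data; attributes and signal model are illustrative assumptions.
    import itertools
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n_attr, n_feat, band = 3, 90, 30

    codes = list(itertools.product([0, 1], repeat=n_attr))  # 8 "words"
    held_out = (1, 0, 1)                                    # absent from training
    train_codes = [c for c in codes if c != held_out]

    def simulate(code, n):
        # Each attribute shifts its own band of features when its bit is 1.
        X = rng.normal(size=(n, n_feat))
        for a, bit in enumerate(code):
            X[:, a * band:(a + 1) * band] += 2.0 * bit
        return X

    X = np.vstack([simulate(c, 20) for c in train_codes])
    Y = np.repeat(train_codes, 20, axis=0)

    # One binary classifier per attribute: the "multiple classification problems".
    clfs = [LogisticRegression(max_iter=1000).fit(X, Y[:, a]) for a in range(n_attr)]

    X_test = simulate(held_out, 10)
    pred = np.array([c.predict(X_test) for c in clfs]).T    # (10, n_attr) codes
    arr = np.array(codes)
    # Identify each test trial by Hamming distance to the vocabulary codes.
    ids = [codes[np.argmin(np.abs(arr - p).sum(axis=1))] for p in pred]
    print(ids[:3])
    ```

    Because each attribute classifier only ever saw its own attribute's values, the combination (1, 0, 1) can be recovered even though no training trial ever carried that code.
    
    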